
Added CI to run ceph tests on ns aws using minio #9186

Open · wants to merge 2 commits into base: master

Conversation

@achouhan09 (Member) commented Aug 8, 2025

Describe the Problem

The NooBaa Ceph S3 tests previously ran only against the default backingstore; there was no way to exercise S3 functionality with namespace resources pointing to external S3-compatible storage.

Explain the Changes

  1. CI workflow - Added ceph-ns-aws-s3-tests.yaml for automated testing of the namespace AWS functionality.
  2. MinIO integration - New Makefile functions (run_minio, stop_minio) to deploy/remove the MinIO container and create the test bucket.
  3. Added a setup_namespace_resource() function that creates the MinIO external connection and namespace resource based on the USE_S3_NAMESPACE_RESOURCE flag.
  4. Modified the test scripts to conditionally select test lists and log directories based on the USE_S3_NAMESPACE_RESOURCE flag.
  5. Created ns_aws_s3_tests_pending_list.txt with 117 failing tests that need fixes for namespace AWS support.
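The flag-based routing in items 3-4 can be sketched roughly as follows (a minimal sketch: the namespace-mode filenames and log path come from this PR, while the non-namespace filenames and the helper's name are assumptions, not the scripts' actual code):

```shell
#!/bin/sh
# Illustrative sketch of the USE_S3_NAMESPACE_RESOURCE routing; everything
# except the flag name and the ns_aws_* files is hypothetical.
select_test_mode() {
    if [ "${USE_S3_NAMESPACE_RESOURCE:-false}" = "true" ]; then
        BLACK_LIST="s3-tests-lists/ns_aws_s3_tests_black_list.txt"
        PENDING_LIST="s3-tests-lists/ns_aws_s3_tests_pending_list.txt"
        CEPH_TEST_LOGS_DIR="/logs/ceph-ns-aws-test-logs"
    else
        BLACK_LIST="s3-tests-lists/s3_tests_black_list.txt"
        PENDING_LIST="s3-tests-lists/s3_tests_pending_list.txt"
        CEPH_TEST_LOGS_DIR="/logs/ceph-test-logs"
    fi
    export CEPH_TEST_LOGS_DIR
}
```

The quoting of the flag with a `:-false` default avoids test failures when the variable is unset, which is the same hardening one of the review comments below suggests for the shipped script.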

GAP: Creating buckets via the S3 API on the AWS namespacestore is not supported for non-NSFS deployments, which is blocking this PR.

Issues: Fixed #xxx / Gap #xxx

  1. Fixed Gap: no test coverage for NooBaa namespace resources with an external S3 backend.

Testing Instructions:

  1. To run the tests: make test-cephs3-ns-aws
  • Doc added/updated
  • Tests added

Summary by CodeRabbit

  • New Features

    • Added automated AWS S3-compatible namespace tests using MinIO and new namespace account configurations.
    • Added namespace-specific blacklist and pending test lists for finer test control.
  • Bug Fixes

    • Test scripts now select log directories and test lists dynamically based on namespace vs host-pool mode.
  • Chores

    • CI workflows updated to run the new namespace S3 tests automatically.

@achouhan09 achouhan09 requested review from jackyalbo, liranmauda, romayalon, a team and Neon-White and removed request for a team August 8, 2025 13:09

coderabbitai bot commented Aug 8, 2025

Walkthrough

Adds a reusable GitHub Actions workflow and a PR job to run Ceph Namespace AWS S3 tests, introduces Makefile targets to run tests against MinIO, updates test setup to optionally use an S3 namespace resource (controlled by USE_S3_NAMESPACE_RESOURCE), and adds namespace-specific blacklist and pending test lists.

Changes

Cohort / File(s) Change Summary
GitHub Actions: new reusable workflow & integration
.github/workflows/ceph-ns-aws-s3-tests.yaml,
.github/workflows/run-pr-tests.yaml
Added a reusable workflow (workflow_call) that downloads/loads the noobaa-tester image and runs the Ceph NS AWS S3 tests; integrated it as a new ceph-ns-aws-s3-tests job in the PR tests requiring build-noobaa-image.
Makefile: MinIO S3 test support
Makefile
Added MINIO_* variables, run_minio / stop_minio helpers and a new phony target test-cephs3-ns-aws to run Ceph S3 namespace tests against a MinIO endpoint; lifecycle and network management included.
Test runner scripts: env-based routing & CORS flag
src/test/external_tests/ceph_s3_tests/run_ceph_test_on_test_container.sh
Export of CEPH_TEST_LOGS_DIR made conditional on USE_S3_NAMESPACE_RESOURCE; added CONFIG_JS_S3_CORS_DEFAULTS_ENABLED=false export and explanatory comments.
Test orchestration: choose test lists by mode
src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_and_run_s3_tests.sh
Choose blacklist/pending lists based on USE_S3_NAMESPACE_RESOURCE (namespace-specific vs generic lists).
Test setup: namespace resource creation + AWS S3 client usage
src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js
Added setup_s3_namespace_resource to create a MinIO bucket, external connection and namespace resource using AWS SDK; ceph_test_setup may use the namespace resource as default_resource when enabled.
Test constants: NSFS account configs
src/test/external_tests/ceph_s3_tests/test_ceph_s3_constants.js
Added ns_aws_cephalt_account_config and ns_aws_cephtenant_account_config entries in CEPH_TEST for namespace-mode accounts.
Namespace-specific test lists
src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt,
src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_pending_list.txt
Added a large blacklist (437 entries) and a pending list (117 entries) specific to Ceph NS AWS S3 test runs; files are configuration-only (tests to skip/mark pending).

Sequence Diagram(s)

sequenceDiagram
    participant GH as GitHub Actions
    participant Artifact as Artifact Storage
    participant Runner as noobaa-tester (container)
    participant Make as Makefile / Test Runner
    participant MinIO as MinIO service
    participant Tests as Test scripts

    GH->>Artifact: download noobaa-tester artifact
    GH->>Runner: docker load image
    GH->>Runner: invoke test script (make test-cephs3-ns-aws)
    Runner->>Make: run_minio (start MinIO)
    Make->>MinIO: launch container and wait ready
    Runner->>Tests: set env (USE_S3_NAMESPACE_RESOURCE, MINIO_*, CEPH_TEST_LOGS_DIR, CORS flag)
    Tests->>MinIO: create bucket / configure connection (setup_s3_namespace_resource)
    Tests->>Make: run s3 test suite (uses blacklist/pending lists)
    Make->>MinIO: stop_minio (teardown)

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~15–25 minutes

Possibly related PRs

Suggested reviewers

  • nadavMiz
  • romayalon
  • alphaprinz


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

🧹 Nitpick comments (5)
src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt (1)

1-437: Document ownership and rationale per section (optional but helpful).

Consider grouping by feature (ACL, lifecycle, encryption, STS/IAM) and adding a companion README mapping entries to tracking issues. This helps triage and gradual un-blacklisting.

I can generate a starter md with grouped sections and placeholders for issue links.

.github/workflows/ceph-ns-aws-s3-tests.yaml (2)

32-34: Avoid world-writable permissions on log directory

chmod 777 grants write access to all users inside the runner and any descendant containers. Limit to 755 (or 775 if group-writable is needed) to reduce risk.


25-27: Use -i flag for portability

docker load --input … is POSIX-compliant but less common; docker load -i /tmp/noobaa-tester.tar is clearer and mirrors Docker docs.

src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js (1)

122-130: Credentials logged in plain text

console.info() / console.log() after reading credentials may leak MINIO_* secrets in CI logs. Omit or redact these values.
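One way to act on this is to mask secret-looking values before they reach the log stream; a hypothetical shell helper (the variable names match this PR, but the helper itself is not part of it):

```shell
#!/bin/sh
# Hypothetical helper, not part of the PR: mask secret-looking env
# assignments before echoing them into CI logs.
redact_secrets() {
    sed -E 's/(MINIO_USER|MINIO_PASSWORD|AWS_ACCESS_KEY_ID|AWS_SECRET_ACCESS_KEY)=[^[:space:]]+/\1=***/g'
}
```

For example, piping `env | redact_secrets` instead of printing `env` directly keeps the variable names visible for debugging while hiding their values.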

Makefile (1)

366-377: Possible port clash & unnecessary host mapping

-p $(MINIO_PORT):$(MINIO_PORT) exposes MinIO on the host but tests interact via the Docker network. Omit -p to avoid host-level port collisions in parallel CI jobs.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 5b2f75c and 808428d.

📒 Files selected for processing (8)
  • .github/workflows/ceph-ns-aws-s3-tests.yaml (1 hunks)
  • .github/workflows/run-pr-tests.yaml (1 hunks)
  • Makefile (3 hunks)
  • src/test/external_tests/ceph_s3_tests/run_ceph_test_on_test_container.sh (1 hunks)
  • src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt (1 hunks)
  • src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_pending_list.txt (1 hunks)
  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_and_run_s3_tests.sh (1 hunks)
  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
src/test/**/*.*

⚙️ CodeRabbit Configuration File

src/test/**/*.*: Ensure that the PR includes tests for the changes.

Files:

  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_and_run_s3_tests.sh
  • src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_pending_list.txt
  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js
  • src/test/external_tests/ceph_s3_tests/run_ceph_test_on_test_container.sh
  • src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt
🧠 Learnings (6)
📚 Learning: 2025-06-19T11:03:49.934Z
Learnt from: nadavMiz
PR: noobaa/noobaa-core#9099
File: .github/workflows/run-pr-tests.yaml:135-136
Timestamp: 2025-06-19T11:03:49.934Z
Learning: In the noobaa-core project, the `make tester` command also calls `make noobaa` due to makefile target dependencies, so running `make tester` will build both the noobaa and tester images.

Applied to files:

  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_and_run_s3_tests.sh
  • .github/workflows/run-pr-tests.yaml
  • .github/workflows/ceph-ns-aws-s3-tests.yaml
  • Makefile
📚 Learning: 2025-06-17T12:59:51.543Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9042
File: src/util/cloud_utils.js:183-194
Timestamp: 2025-06-17T12:59:51.543Z
Learning: In the set_noobaa_s3_connection function in src/util/cloud_utils.js, internal NooBaa S3 endpoints (filtered by 'api': 's3', 'kind': 'INTERNAL') will always be non-secure, so tls: false is the correct setting despite the conditional https:// scheme logic.

Applied to files:

  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_and_run_s3_tests.sh
  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js
📚 Learning: 2025-06-19T10:01:46.933Z
Learnt from: nadavMiz
PR: noobaa/noobaa-core#9099
File: .github/workflows/run-pr-tests.yaml:52-70
Timestamp: 2025-06-19T10:01:46.933Z
Learning: In the noobaa-core repository, branch names don't contain forward slashes, so Docker tag sanitization is not needed in GitHub Actions workflows. Manual full builds and automated builds should use consistent tag naming to avoid mismatches.

Applied to files:

  • .github/workflows/run-pr-tests.yaml
📚 Learning: 2025-08-08T13:12:46.718Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.718Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.

Applied to files:

  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js
📚 Learning: 2025-08-08T13:10:36.099Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.099Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.

Applied to files:

  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js
📚 Learning: 2025-08-08T13:13:42.168Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.168Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.

Applied to files:

  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js
🪛 YAMLlint (1.37.1)
.github/workflows/run-pr-tests.yaml

[warning] 36-36: wrong indentation: expected 6 but found 4

(indentation)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: ceph-nsfs-s3-tests / NSFS Ceph S3 Tests
  • GitHub Check: ceph-ns-aws-s3-tests / Ceph NS AWS S3 Tests
  • GitHub Check: ceph-s3-tests / Ceph S3 Tests
  • GitHub Check: run-nc-unit-tests / Non Containerized Unit Tests
  • GitHub Check: warp-tests / Warp Tests
  • GitHub Check: mint-nc-tests / Mint NC Tests
  • GitHub Check: run-unit-tests-postgres / Unit Tests with Postgres
  • GitHub Check: mint-tests / Mint Tests
  • GitHub Check: run-unit-tests / Unit Tests
  • GitHub Check: run-sanity-tests / Sanity Tests
  • GitHub Check: warp-nc-tests / Warp NC Tests
  • GitHub Check: run-sanity-ssl-tests / Sanity SSL Tests
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (2)
src/test/external_tests/ceph_s3_tests/run_ceph_test_on_test_container.sh (1)

37-43: Add CI upload step for namespace-resource logs

I didn’t find any references to /logs/ceph-ns-aws-test-logs or CEPH_TEST_LOGS_DIR in .github/workflows/ceph-ns-aws-s3-tests.yaml. Please update the workflow to ensure the new logs directory is uploaded when USE_NAMESPACE_RESOURCE=true:

  • Verify that the upload step (e.g., actions/upload-artifact) includes ${{ env.CEPH_TEST_LOGS_DIR }} or explicitly /logs/ceph-ns-aws-test-logs.
  • Confirm that artifacts are correctly collected under both log paths.
Makefile (1)

118-124: Pin container image tags

MINIO_IMAGE and AWS_CLI_IMAGE are unpinned (latest). CI reproducibility improves if you reference immutable digests or specific versions.

Makefile Outdated
Comment on lines 613 to 619
define run_minio
@echo "\033[1;34mRunning Minio container\033[0m"
$(CONTAINER_ENGINE) run -d $(CPUSET) --network noobaa-net --name coretest-minio-$(GIT_COMMIT)-$(NAME_POSTFIX) -p $(MINIO_PORT):$(MINIO_PORT) --env "MINIO_ROOT_USER=$(MINIO_USER)" --env "MINIO_ROOT_PASSWORD=$(MINIO_PASSWORD)" $(MINIO_IMAGE) server /data --address ":$(MINIO_PORT)"
@echo "\033[1;34mWaiting for Minio to start..\033[0m"
sleep 20
@echo "\033[1;34mCreating test bucket in Minio..\033[0m"
$(CONTAINER_ENGINE) run --rm --network noobaa-net --env "AWS_ACCESS_KEY_ID=$(MINIO_USER)" --env "AWS_SECRET_ACCESS_KEY=$(MINIO_PASSWORD)" --env "AWS_DEFAULT_REGION=us-east-1" $(AWS_CLI_IMAGE) s3 mb s3://$(MINIO_TEST_BUCKET) --endpoint-url http://coretest-minio-$(GIT_COMMIT)-$(NAME_POSTFIX):$(MINIO_PORT)
@echo "\033[1;34mVerifying bucket creation..\033[0m"
$(CONTAINER_ENGINE) run --rm --network noobaa-net --env "AWS_ACCESS_KEY_ID=$(MINIO_USER)" --env "AWS_SECRET_ACCESS_KEY=$(MINIO_PASSWORD)" --env "AWS_DEFAULT_REGION=us-east-1" $(AWS_CLI_IMAGE) s3 ls --endpoint-url http://coretest-minio-$(GIT_COMMIT)-$(NAME_POSTFIX):$(MINIO_PORT)
@echo "\033[1;32mRun Minio done.\033[0m"
endef

🛠️ Refactor suggestion

Fixed 20 s sleep may be flaky

Startup time for MinIO varies; replace the blind sleep 20 with a simple health-check loop to wait until the endpoint responds, reducing intermittent CI failures.

# Wait for MinIO readiness (max 30 s)
@for i in $$(seq 1 30); do \
  $(CONTAINER_ENGINE) run --rm --network noobaa-net \
    --env "AWS_ACCESS_KEY_ID=$(MINIO_USER)" --env "AWS_SECRET_ACCESS_KEY=$(MINIO_PASSWORD)" \
    $(AWS_CLI_IMAGE) s3 ls --endpoint-url http://coretest-minio-$(GIT_COMMIT)-$(NAME_POSTFIX):$(MINIO_PORT) >/dev/null 2>&1 && break; \
  sleep 1; \
done

(The AWS credential env vars are included so the s3 ls probe can actually authenticate, matching the bucket-creation commands in this define.)
🤖 Prompt for AI Agents
In the Makefile between lines 613 and 623, replace the fixed 20-second sleep
used to wait for Minio startup with a health-check loop that attempts to list
buckets on the Minio endpoint repeatedly for up to 30 seconds, breaking early if
successful. This involves removing the "sleep 20" line and adding a for-loop
that runs the AWS CLI s3 ls command against the Minio endpoint, sleeping 1
second between retries, to ensure Minio is ready before proceeding.

Comment on lines +47 to +50
test_realistic/TestFileValidator.py::test_new_file_is_valid_on_several_calls
test_realistic/TestFileValidator.py::test_new_file_is_valid_when_size_is_1
test_realistic/TestFileValidator.py::test_new_file_is_valid
test_realistic/TestFiles/test_random_file_valid

💡 Verification agent

🧩 Analysis chain

Duplicate blacklist entries — prune for maintainability.

The TestFileValidator and TestFiles entries appear twice, increasing drift and review noise.

Apply this diff to remove the earlier duplicates:

-test_realistic/TestFileValidator.py::test_new_file_is_valid_on_several_calls
-test_realistic/TestFileValidator.py::test_new_file_is_valid_when_size_is_1
-test_realistic/TestFileValidator.py::test_new_file_is_valid
-test_realistic/TestFiles/test_random_file_valid

Detect all duplicates:

Also applies to: 286-289


🏁 Script executed:

#!/bin/bash
awk 'NF && $0 !~ /^#/ {cnt[$0]++} END{for (k in cnt) if (cnt[k]>1) print cnt[k] "x\t" k}' \
  src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt | sort -nr

Length of output: 458


Remove duplicate blacklist entries

The following entries each appear twice in src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt and should be pruned once (remove the first occurrence at lines 47–50):

  • test_realistic/TestFileValidator.py::test_new_file_is_valid_on_several_calls
  • test_realistic/TestFileValidator.py::test_new_file_is_valid_when_size_is_1
  • test_realistic/TestFileValidator.py::test_new_file_is_valid
  • test_realistic/TestFiles/test_random_file_valid

Proposed diff:

@@ -47,4 +47,0 @@
-test_realistic/TestFileValidator.py::test_new_file_is_valid_on_several_calls
-test_realistic/TestFileValidator.py::test_new_file_is_valid_when_size_is_1
-test_realistic/TestFileValidator.py::test_new_file_is_valid
-test_realistic/TestFiles/test_random_file_valid
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
test_realistic/TestFileValidator.py::test_new_file_is_valid_on_several_calls
test_realistic/TestFileValidator.py::test_new_file_is_valid_when_size_is_1
test_realistic/TestFileValidator.py::test_new_file_is_valid
test_realistic/TestFiles/test_random_file_valid
🤖 Prompt for AI Agents
In
src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt
at lines 47 to 50, there are duplicate blacklist entries for four test cases.
Remove the first occurrence of each duplicate entry in these lines to ensure
each test is listed only once in the blacklist file.

test_realistic/TestFileValidator.py::test_new_file_is_valid
test_realistic/TestFiles/test_random_file_valid
s3tests_boto3/functional/test_headers.py::test_object_create_bad_contentlength_none
s3tests_boto3/functional/test_headers.py::test_object_create_bad_contentlength_mismatch_abov

💡 Verification agent

🧩 Analysis chain

Typo breaks blacklist entry (won’t match any test).

The test id seems truncated; it should end with “_above”.

Apply this fix:

-s3tests_boto3/functional/test_headers.py::test_object_create_bad_contentlength_mismatch_abov
+s3tests_boto3/functional/test_headers.py::test_object_create_bad_contentlength_mismatch_above

Run to confirm no other truncated identifiers:


🏁 Script executed:

#!/bin/bash
rg -n '::[A-Za-z0-9_]*abov($|[^e])' src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt || true

Length of output: 217


Fix truncated blacklist entry in ns_aws_s3_tests_black_list.txt

The test identifier on line 52 is missing the trailing “e” and won’t match the actual test name. A quick grep confirms this is the only occurrence.

Apply this change:

--- a/src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt
+++ b/src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt
@@ -52,1 +52,1 @@
-s3tests_boto3/functional/test_headers.py::test_object_create_bad_contentlength_mismatch_abov
+s3tests_boto3/functional/test_headers.py::test_object_create_bad_contentlength_mismatch_above
📝 Committable suggestion


Suggested change
s3tests_boto3/functional/test_headers.py::test_object_create_bad_contentlength_mismatch_abov
s3tests_boto3/functional/test_headers.py::test_object_create_bad_contentlength_mismatch_above
🤖 Prompt for AI Agents
In
src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt
at line 52, the test identifier is truncated and missing the trailing "e".
Correct the test name by adding the missing "e" at the end to ensure it matches
the actual test name exactly.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Nitpick comments (6)
src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_pending_list.txt (1)

1-1: Add a brief header explaining the list’s purpose and maintenance policy

A one-line comment (e.g. starting with #) that states why these tests are pending and how/when they should be re-evaluated will help future maintainers keep the list current.

src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt (2)

11-12: Remove duplicate test names to avoid maintenance drift

test_logging_toggle appears more than once in this blacklist. Deduplicate the list before committing.

Also applies to: 184-185


1-1: Document why each test is black-listed

Insert a short file-level comment describing criteria for adding/removing tests and, optionally, a per-entry inline reason (same line after #). Improves traceability.

src/test/external_tests/ceph_s3_tests/run_ceph_test_on_test_container.sh (1)

39-43: Harden the flag check and quote the path

-if [ "${USE_NAMESPACE_RESOURCE}" = "true" ]; then
-    export CEPH_TEST_LOGS_DIR=/logs/ceph-ns-aws-test-logs
+if [ "${USE_NAMESPACE_RESOURCE:-false}" = "true" ]; then
+    export CEPH_TEST_LOGS_DIR="/logs/ceph-ns-aws-test-logs"
 else
-    export CEPH_TEST_LOGS_DIR=/logs/ceph-test-logs
+    export CEPH_TEST_LOGS_DIR="/logs/ceph-test-logs"
 fi

Prevents “unary operator expected” and handles paths with spaces.

.github/workflows/ceph-ns-aws-s3-tests.yaml (1)

29-35: Harden the test-execution shell snippet

Add strict bash flags so failures propagate and unset vars are caught:

-run: |
-  set -x
+run: |
+  set -euo pipefail
+  set -x

This prevents silent test or docker-command failures from being masked.

Makefile (1)

114-124: Pin images & avoid hard-coded credentials

  1. MINIO_IMAGE and AWS_CLI_IMAGE default to latest. Pin to a known tag to avoid sudden CI breaks.
  2. Even for test use, shipping default root credentials (miniotest/miniotest123) in-tree risks accidental reuse in non-test environments. Consider sourcing them exclusively from CI secrets.
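A minimal sketch of the second point in shell form (the default values mirror the credentials quoted above; real CI would inject secrets and never hit the fallbacks):

```shell
#!/bin/sh
# Illustrative only: accept credentials from the environment (e.g. CI
# secrets) and fall back to throwaway defaults solely for local dev runs.
: "${MINIO_USER:=miniotest}"
: "${MINIO_PASSWORD:=miniotest123}"
```

The `${VAR:=default}` expansion assigns the default only when the variable is unset or empty, so a CI-provided value always wins.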

Makefile Outdated
Comment on lines 366 to 377
test-cephs3-ns-aws: tester
@echo "\033[1;34mRunning ceph-s3 tests with namespace store pointing to s3 bucket in Minio\033[0m"
@$(call create_docker_network)
@$(call run_minio)
@$(call run_postgres)
@echo "\033[1;34mRunning tests\033[0m"
$(CONTAINER_ENGINE) run $(CPUSET) --privileged --user root --network noobaa-net --name noobaa_$(GIT_COMMIT)_$(NAME_POSTFIX) --env "SUPPRESS_LOGS=$(SUPPRESS_LOGS)" --env "POSTGRES_HOST=coretest-postgres-$(GIT_COMMIT)-$(NAME_POSTFIX)" --env "POSTGRES_USER=noobaa" --env "DB_TYPE=postgres" --env "POSTGRES_DBNAME=coretest" --env "USE_NAMESPACE_RESOURCE=true" --env "MINIO_ENDPOINT=http://coretest-minio-$(GIT_COMMIT)-$(NAME_POSTFIX):$(MINIO_PORT)" --env "MINIO_USER=$(MINIO_USER)" --env "MINIO_PASSWORD=$(MINIO_PASSWORD)" --env "MINIO_TEST_BUCKET=$(MINIO_TEST_BUCKET)" -v $(PWD)/logs:/logs $(TESTER_TAG) "./src/test/external_tests/ceph_s3_tests/run_ceph_test_on_test_container.sh"
@$(call stop_noobaa)
@$(call stop_postgres)
@$(call stop_minio)
@$(call remove_docker_network)
.PHONY: test-cephs3-ns-aws

🛠️ Refactor suggestion

Potential port clash & unnecessary host-port publishing

run_minio maps -p $(MINIO_PORT):$(MINIO_PORT) although every container is attached to the dedicated noobaa-net network and the tests reach MinIO by container name.
Omit the -p flag to:

  • avoid host-level port collisions when multiple jobs share a runner
  • speed-up network namespace setup
-$(CONTAINER_ENGINE) run -d $(CPUSET) --network noobaa-net --name coretest-minio-… -p $(MINIO_PORT):$(MINIO_PORT) …
+$(CONTAINER_ENGINE) run -d $(CPUSET) --network noobaa-net --name coretest-minio-… …

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In the Makefile around lines 366 to 377, the run_minio function currently
publishes the MinIO port to the host using the -p flag, which is unnecessary
since containers communicate over the noobaa-net network by container name.
Remove the -p $(MINIO_PORT):$(MINIO_PORT) option from the run_minio command to
prevent potential port conflicts on the host and to improve container startup
performance by avoiding host port mapping.

Makefile Outdated
Comment on lines 613 to 619
define run_minio
@echo "\033[1;34mRunning Minio container\033[0m"
$(CONTAINER_ENGINE) run -d $(CPUSET) --network noobaa-net --name coretest-minio-$(GIT_COMMIT)-$(NAME_POSTFIX) -p $(MINIO_PORT):$(MINIO_PORT) --env "MINIO_ROOT_USER=$(MINIO_USER)" --env "MINIO_ROOT_PASSWORD=$(MINIO_PASSWORD)" $(MINIO_IMAGE) server /data --address ":$(MINIO_PORT)"
@echo "\033[1;34mWaiting for Minio to start..\033[0m"
sleep 20
@echo "\033[1;34mCreating test bucket in Minio..\033[0m"
$(CONTAINER_ENGINE) run --rm --network noobaa-net --env "AWS_ACCESS_KEY_ID=$(MINIO_USER)" --env "AWS_SECRET_ACCESS_KEY=$(MINIO_PASSWORD)" --env "AWS_DEFAULT_REGION=us-east-1" $(AWS_CLI_IMAGE) s3 mb s3://$(MINIO_TEST_BUCKET) --endpoint-url http://coretest-minio-$(GIT_COMMIT)-$(NAME_POSTFIX):$(MINIO_PORT)
@echo "\033[1;34mVerifying bucket creation..\033[0m"
$(CONTAINER_ENGINE) run --rm --network noobaa-net --env "AWS_ACCESS_KEY_ID=$(MINIO_USER)" --env "AWS_SECRET_ACCESS_KEY=$(MINIO_PASSWORD)" --env "AWS_DEFAULT_REGION=us-east-1" $(AWS_CLI_IMAGE) s3 ls --endpoint-url http://coretest-minio-$(GIT_COMMIT)-$(NAME_POSTFIX):$(MINIO_PORT)
@echo "\033[1;32mRun Minio done.\033[0m"
endef

🛠️ Refactor suggestion

Race window: bucket creation can fail if MinIO is not ready

A fixed sleep 20 is brittle. Replace with a readiness loop that polls the health endpoint or waits for mc admin info/aws s3 ls to succeed, exiting on timeout.

for i in {1..15}; do
  if $(CONTAINER_ENGINE) logs coretest-minio-… 2>&1 | grep -q 'API: http'; then break; fi
  sleep 2
done

This will reduce flaky CI runs.
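A fuller form of that loop fails fast instead of silently proceeding when the ready marker never appears. This is a sketch only: the probe command and marker stand in for `$(CONTAINER_ENGINE) logs coretest-minio-…` and `API: http`.

```shell
#!/usr/bin/env bash
# Poll a command's output for a marker string, retrying once per second.
# Returns 0 as soon as the marker appears, 1 after the attempts run out.
wait_for_marker() {
    local tries=$1 marker=$2
    shift 2
    local i
    for ((i = 1; i <= tries; i++)); do
        if "$@" 2>&1 | grep -q "$marker"; then
            return 0
        fi
        sleep 1
    done
    echo "timed out waiting for marker: $marker" >&2
    return 1
}

# Intended CI use (not run here; container name is illustrative):
# wait_for_marker 15 'API: http' docker logs "coretest-minio-$GIT_COMMIT" || exit 1
```

Returning nonzero on timeout lets the Makefile target abort before the bucket-creation step runs against a MinIO that never came up.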

🤖 Prompt for AI Agents
In the Makefile around lines 613 to 623, replace the fixed "sleep 20" delay with
a readiness check loop that polls Minio's health by checking container logs or
running a command like "mc admin info" or "aws s3 ls" until it succeeds or a
timeout occurs. Implement a loop that retries the check every few seconds (e.g.,
2 seconds) up to a maximum number of attempts (e.g., 15), breaking early if
Minio is ready. This will ensure the bucket creation commands run only after
Minio is fully started, preventing race conditions and flaky CI runs.

Comment on lines 14 to 20
if [ "${USE_NAMESPACE_RESOURCE}" = "true" ]; then
S3_CEPH_TEST_BLACKLIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/ns_aws_s3_tests_black_list.txt"
S3_CEPH_TEST_PENDING_LIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/ns_aws_s3_tests_pending_list.txt"
else
S3_CEPH_TEST_BLACKLIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/s3_tests_black_list.txt"
S3_CEPH_TEST_PENDING_LIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/s3_tests_pending_list.txt"
fi

🛠️ Refactor suggestion

Guard against an unset USE_NAMESPACE_RESOURCE

If the variable is undefined the test will fall through to the generic lists, but the [ ] test will raise “unary operator expected” under set -u. Use parameter expansion to be safe:

-if [ "${USE_NAMESPACE_RESOURCE}" = "true" ]; then
+if [ "${USE_NAMESPACE_RESOURCE:-false}" = "true" ]; then
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
if [ "${USE_NAMESPACE_RESOURCE}" = "true" ]; then
S3_CEPH_TEST_BLACKLIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/ns_aws_s3_tests_black_list.txt"
S3_CEPH_TEST_PENDING_LIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/ns_aws_s3_tests_pending_list.txt"
else
S3_CEPH_TEST_BLACKLIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/s3_tests_black_list.txt"
S3_CEPH_TEST_PENDING_LIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/s3_tests_pending_list.txt"
fi
if [ "${USE_NAMESPACE_RESOURCE:-false}" = "true" ]; then
S3_CEPH_TEST_BLACKLIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/ns_aws_s3_tests_black_list.txt"
S3_CEPH_TEST_PENDING_LIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/ns_aws_s3_tests_pending_list.txt"
else
S3_CEPH_TEST_BLACKLIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/s3_tests_black_list.txt"
S3_CEPH_TEST_PENDING_LIST="${CEPH_S3_DIRECTORY}/s3-tests-lists/s3_tests_pending_list.txt"
fi
🤖 Prompt for AI Agents
In src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_and_run_s3_tests.sh
around lines 14 to 20, the conditional test on USE_NAMESPACE_RESOURCE can cause
a "unary operator expected" error if the variable is unset. Fix this by using
parameter expansion to provide a default value, such as
"${USE_NAMESPACE_RESOURCE:-}", in the test expression to safely handle the case
when the variable is undefined.
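The guarded expansion can be demonstrated in isolation; the function name and echoed strings below are illustrative only.

```shell
#!/usr/bin/env bash
set -u  # unset variables are errors, mirroring strict CI scripts

# With set -u, expanding an unset USE_S3_NAMESPACE_RESOURCE directly would
# abort the script; the :-false default makes the comparison safe and falls
# through to the generic lists.
choose_lists() {
    if [ "${USE_S3_NAMESPACE_RESOURCE:-false}" = "true" ]; then
        echo "ns_aws lists"
    else
        echo "generic lists"
    fi
}

choose_lists                                  # unset -> generic lists
USE_S3_NAMESPACE_RESOURCE=true choose_lists   # -> ns_aws lists
```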

Contributor

@romayalon romayalon left a comment


@achouhan09
That's great, please don't forget to add this new CI/Tests capability to our documents.

export CEPH_TEST_LOGS_DIR=/logs/ceph-test-logs
# set logs directory based on test type
# later we will add more test types and logs directories on the basis of namespace resource type
if [ "${USE_NAMESPACE_RESOURCE}" = "true" ]; then
Contributor


USE_NAMESPACE_RESOURCE is not a good name IMHO; we aim to support namespace blob as well, so this name is too general for the S3-compatible case

Member Author


Changed it to USE_S3_NAMESPACE_RESOURCE. Thanks


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🔭 Outside diff range comments (1)
src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js (1)

87-100: Don’t swallow account creation errors blindly. Fail fast or only ignore “already exists”.

Current catch logs and proceeds, which can lead to opaque failures later if the accounts truly weren’t created.

Consider:

-    } catch (err) {
-        console.log("Failed to create account or tenant, assuming they were already created and continuing. ", err.message);
-    }
+    } catch (err) {
+        // Only continue if the error implies existing entities; otherwise fail fast.
+        if (!/already exists/i.test(err.message)) {
+            console.error('Account/tenant creation failed:', err.message);
+            throw err;
+        }
+        console.log('Account/tenant already exist. Continuing.');
+    }
♻️ Duplicate comments (1)
.github/workflows/run-pr-tests.yaml (1)

35-37: Fix YAML indentation (yamllint error).

Indent the needs and uses keys two more spaces under ceph-ns-aws-s3-tests.

-  ceph-ns-aws-s3-tests:
-    needs: build-noobaa-image
-    uses: ./.github/workflows/ceph-ns-aws-s3-tests.yaml
+  ceph-ns-aws-s3-tests:
+      needs: build-noobaa-image
+      uses: ./.github/workflows/ceph-ns-aws-s3-tests.yaml
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 808428d and 95a5292.

📒 Files selected for processing (9)
  • .github/workflows/ceph-ns-aws-s3-tests.yaml (1 hunks)
  • .github/workflows/run-pr-tests.yaml (1 hunks)
  • Makefile (3 hunks)
  • src/test/external_tests/ceph_s3_tests/run_ceph_test_on_test_container.sh (1 hunks)
  • src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt (1 hunks)
  • src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_pending_list.txt (1 hunks)
  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_and_run_s3_tests.sh (1 hunks)
  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js (3 hunks)
  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_constants.js (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (6)
  • src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_black_list.txt
  • .github/workflows/ceph-ns-aws-s3-tests.yaml
  • src/test/external_tests/ceph_s3_tests/s3-tests-lists/ns_aws_s3_tests_pending_list.txt
  • src/test/external_tests/ceph_s3_tests/run_ceph_test_on_test_container.sh
  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_and_run_s3_tests.sh
  • Makefile
🧰 Additional context used
📓 Path-based instructions (1)
src/test/**/*.*

⚙️ CodeRabbit Configuration File

src/test/**/*.*: Ensure that the PR includes tests for the changes.

Files:

  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_constants.js
  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js
🧠 Learnings (3)
📚 Learning: 2025-08-08T13:13:42.168Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1304-1307
Timestamp: 2025-08-08T13:13:42.168Z
Learning: In noobaa-core, do not rely on config.INTERNAL_STORAGE_POOL_NAME to detect backingstore pools. In src/server/system_services/pool_server.js, prefer checking pool.hosts_pool_info.backingstore or pool.cloud_pool_info.backingstore; fallback to prefix-matching the pool name against config.DEFAULT_POOL_NAME ('backingstores') to handle potential suffixes.

Applied to files:

  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js
📚 Learning: 2025-08-08T13:12:46.718Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js:9-17
Timestamp: 2025-08-08T13:12:46.718Z
Learning: In upgrade script src/upgrade/upgrade_scripts/5.20.0/remove_mongo_pool.js for noobaa-core, rely on structural detection (e.g., pool.mongo_info, and resource_type === 'INTERNAL') with name-prefix fallback for removing legacy mongo/internal pools, instead of depending solely on config.INTERNAL_STORAGE_POOL_NAME or config.DEFAULT_POOL_NAME. Handle multi-system stores and remove all matching pools in one change.

Applied to files:

  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js
📚 Learning: 2025-08-08T13:10:36.099Z
Learnt from: naveenpaul1
PR: noobaa/noobaa-core#9182
File: src/server/system_services/pool_server.js:1314-1317
Timestamp: 2025-08-08T13:10:36.099Z
Learning: In src/server/system_services/pool_server.js (and config usage), the constant config.INTERNAL_STORAGE_POOL_NAME has been removed from the system. Future logic should not depend on this constant and should instead use config.DEFAULT_POOL_NAME or structural markers (e.g., pool.resource_type === 'INTERNAL' or pool.mongo_info) to identify internal/mongo pools.

Applied to files:

  • src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js
🧬 Code Graph Analysis (1)
src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js (3)
src/test/external_tests/ceph_s3_tests/test_ceph_nsfs_s3_config_setup.js (2)
  • require (15-15)
  • require (16-16)
src/test/external_tests/ceph_s3_tests/test_ceph_s3.js (1)
  • require (18-18)
src/test/external_tests/ceph_s3_tests/test_ceph_s3_constants.js (1)
  • CEPH_TEST (11-57)
🪛 YAMLlint (1.37.1)
.github/workflows/run-pr-tests.yaml

[warning] 36-36: wrong indentation: expected 6 but found 4

(indentation)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-package-lock-validation
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (3)
.github/workflows/run-pr-tests.yaml (1)

35-37: Ensure the reusable workflow downloads the noobaa-image artifact.

The called workflow must download both artifacts produced here (noobaa-image and noobaa-tester). Add a download step for noobaa-image in .github/workflows/ceph-ns-aws-s3-tests.yaml.

Example step to add in ceph-ns-aws-s3-tests.yaml:

- uses: actions/download-artifact@v4
  with:
    name: noobaa-image
src/test/external_tests/ceph_s3_tests/test_ceph_s3_constants.js (1)

45-56: LGTM: namespace AWS account configs added.

Keys and fields align with nsfs_account_config usage introduced in setup.

src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js (1)

17-19: LGTM: imports for MinIO path.

aws-sdk and cloud_utils are correctly introduced for the MinIO-backed namespace setup.

Comment on lines 66 to +86
try {
const use_s3_namespace = process.env.USE_S3_NAMESPACE_RESOURCE === 'true';

const default_resource = use_s3_namespace ?
await setup_s3_namespace_resource() :
(() => {
// We are taking the first host pool, in normal k8s setup is default backing store
const test_pool = system.pools.filter(p => p.resource_type === 'HOSTS')[0];
console.log(test_pool);
return test_pool.name;
})();
console.log("default_resource: ", default_resource);

const account_config = use_s3_namespace ? {
cephalt: { nsfs_account_config: CEPH_TEST.ns_aws_cephalt_account_config },
cephtenant: { nsfs_account_config: CEPH_TEST.ns_aws_cephtenant_account_config }
} : {
cephalt: {},
cephtenant: {}
};


⚠️ Potential issue

Guard against missing HOSTS backingstore and avoid logging full pool objects.

Selecting the first HOSTS pool can break or pick the wrong pool; also avoid noisy console.log of the entire pool.

Use a safer selection that prefers the HOSTS pool marked as backingstore; fall back to name prefix. Also simplify the control flow and remove the debug print:

-        const default_resource = use_s3_namespace ?
-            await setup_s3_namespace_resource() :
-            (() => {
-                // We are taking the first host pool, in normal k8s setup is default backing store
-                const test_pool = system.pools.filter(p => p.resource_type === 'HOSTS')[0];
-                console.log(test_pool);
-                return test_pool.name;
-            })();
+        let default_resource;
+        if (use_s3_namespace) {
+            default_resource = await setup_s3_namespace_resource();
+        } else {
+            const host_pools = system.pools.filter(p => p.resource_type === 'HOSTS');
+            const backingstore_pool =
+                host_pools.find(p => p.hosts_pool_info?.backingstore) ||
+                host_pools.find(p => typeof p.name === 'string' && p.name.startsWith('backingstores'));
+            if (!backingstore_pool) {
+                throw new Error('No HOSTS backingstore pool found – cannot determine default resource');
+            }
+            default_resource = backingstore_pool.name;
+        }
         console.log("default_resource: ", default_resource);

Reference: recent internal guidance discourages relying on fixed names; prefer structural markers like hosts_pool_info.backingstore with a name-prefix fallback.

📝 Committable suggestion


Suggested change
try {
const use_s3_namespace = process.env.USE_S3_NAMESPACE_RESOURCE === 'true';
const default_resource = use_s3_namespace ?
await setup_s3_namespace_resource() :
(() => {
// We are taking the first host pool, in normal k8s setup is default backing store
const test_pool = system.pools.filter(p => p.resource_type === 'HOSTS')[0];
console.log(test_pool);
return test_pool.name;
})();
console.log("default_resource: ", default_resource);
const account_config = use_s3_namespace ? {
cephalt: { nsfs_account_config: CEPH_TEST.ns_aws_cephalt_account_config },
cephtenant: { nsfs_account_config: CEPH_TEST.ns_aws_cephtenant_account_config }
} : {
cephalt: {},
cephtenant: {}
};
try {
const use_s3_namespace = process.env.USE_S3_NAMESPACE_RESOURCE === 'true';
let default_resource;
if (use_s3_namespace) {
default_resource = await setup_s3_namespace_resource();
} else {
const host_pools = system.pools.filter(p => p.resource_type === 'HOSTS');
const backingstore_pool =
host_pools.find(p => p.hosts_pool_info?.backingstore) ||
host_pools.find(p => typeof p.name === 'string' && p.name.startsWith('backingstores'));
if (!backingstore_pool) {
throw new Error('No HOSTS backingstore pool found – cannot determine default resource');
}
default_resource = backingstore_pool.name;
}
console.log("default_resource: ", default_resource);
const account_config = use_s3_namespace ? {
cephalt: { nsfs_account_config: CEPH_TEST.ns_aws_cephalt_account_config },
cephtenant: { nsfs_account_config: CEPH_TEST.ns_aws_cephtenant_account_config }
} : {
cephalt: {},
cephtenant: {}
};

Comment on lines +128 to +179
async function setup_s3_namespace_resource() {
const minio_endpoint = process.env.MINIO_ENDPOINT;
const minio_user = process.env.MINIO_USER;
const minio_password = process.env.MINIO_PASSWORD;
const minio_test_bucket = process.env.MINIO_TEST_BUCKET;
const namespace_resource_name = "ns-aws";

// create the bucket in Minio using AWS SDK
console.info(`Creating bucket ${minio_test_bucket} in Minio...`);
try {
const s3 = new AWS.S3({
endpoint: minio_endpoint,
accessKeyId: minio_user,
secretAccessKey: minio_password,
s3ForcePathStyle: true,
signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(minio_endpoint)
});

await s3.createBucket({ Bucket: minio_test_bucket }).promise();
console.log(`Bucket ${minio_test_bucket} created successfully in Minio`);
} catch (err) {
throw new Error(`Bucket creation failed: ${err.message}`);
}

console.info('Creating external connection...');
try {
await client.account.add_external_connection({
name: "minio-connection",
endpoint: minio_endpoint,
endpoint_type: "S3_COMPATIBLE",
identity: minio_user,
secret: minio_password
});
console.log('External connection created successfully');
} catch (err) {
throw new Error(`External connection creation failed: ${err.message}`);
}

console.info('Creating namespace resource...');
try {
await client.pool.create_namespace_resource({
name: namespace_resource_name,
connection: "minio-connection",
target_bucket: minio_test_bucket
});
console.log('Namespace resource created successfully');
} catch (err) {
throw new Error(`Namespace resource creation failed: ${err.message}`);
}

return namespace_resource_name;
}

@coderabbitai coderabbitai bot Aug 11, 2025


🛠️ Refactor suggestion

Make setup_s3_namespace_resource idempotent and validate required env vars.

Improve robustness: validate env vars up front; tolerate pre-existing bucket/connection/resource; optionally wait for resource readiness.

 async function setup_s3_namespace_resource() {
     const minio_endpoint = process.env.MINIO_ENDPOINT;
     const minio_user = process.env.MINIO_USER;
     const minio_password = process.env.MINIO_PASSWORD;
     const minio_test_bucket = process.env.MINIO_TEST_BUCKET;
     const namespace_resource_name = "ns-aws";
 
+    // Validate required env vars
+    const required = {
+        MINIO_ENDPOINT: minio_endpoint,
+        MINIO_USER: minio_user,
+        MINIO_PASSWORD: minio_password,
+        MINIO_TEST_BUCKET: minio_test_bucket,
+    };
+    for (const [k, v] of Object.entries(required)) {
+        if (!v) throw new Error(`Missing required env var: ${k}`);
+    }
+
     // create the bucket in Minio using AWS SDK
     console.info(`Creating bucket ${minio_test_bucket} in Minio...`);
     try {
         const s3 = new AWS.S3({
             endpoint: minio_endpoint,
             accessKeyId: minio_user,
             secretAccessKey: minio_password,
             s3ForcePathStyle: true,
             signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(minio_endpoint)
         });
 
-        await s3.createBucket({ Bucket: minio_test_bucket }).promise();
-        console.log(`Bucket ${minio_test_bucket} created successfully in Minio`);
+        // Be idempotent: create if missing, else continue
+        try {
+            await s3.headBucket({ Bucket: minio_test_bucket }).promise();
+            console.log(`Bucket ${minio_test_bucket} already exists in Minio`);
+        } catch (e) {
+            if (e.statusCode === 404 || e.code === 'NotFound' || e.code === 'NoSuchBucket') {
+                await s3.createBucket({ Bucket: minio_test_bucket }).promise();
+                console.log(`Bucket ${minio_test_bucket} created successfully in Minio`);
+            } else {
+                throw e;
+            }
+        }
     } catch (err) {
         throw new Error(`Bucket creation failed: ${err.message}`);
     }
 
     console.info('Creating external connection...');
     try {
         await client.account.add_external_connection({
             name: "minio-connection",
             endpoint: minio_endpoint,
             endpoint_type: "S3_COMPATIBLE",
             identity: minio_user,
             secret: minio_password
         });
         console.log('External connection created successfully');
     } catch (err) {
-        throw new Error(`External connection creation failed: ${err.message}`);
+        // Allow reruns
+        if (!/already exists/i.test(err.message)) {
+            throw new Error(`External connection creation failed: ${err.message}`);
+        }
+        console.log('External connection already exists. Continuing.');
     }
 
     console.info('Creating namespace resource...');
     try {
         await client.pool.create_namespace_resource({
             name: namespace_resource_name,
             connection: "minio-connection",
             target_bucket: minio_test_bucket
         });
         console.log('Namespace resource created successfully');
     } catch (err) {
-        throw new Error(`Namespace resource creation failed: ${err.message}`);
+        // Allow reruns
+        if (!/already exists/i.test(err.message)) {
+            throw new Error(`Namespace resource creation failed: ${err.message}`);
+        }
+        console.log('Namespace resource already exists. Continuing.');
     }
 
     return namespace_resource_name;
 }

Optionally, poll for readiness after creation and before returning:

// Optional: wait until OPTIMAL
// const waitMs = 2000, tries = 15;
// for (let i = 0; i < tries; i++) {
//   const res = await client.pool.read_pool({ name: namespace_resource_name });
//   if (res?.monitoring?.state === 'OPTIMAL') break;
//   await new Promise(r => setTimeout(r, waitMs));
// }
🤖 Prompt for AI Agents
In src/test/external_tests/ceph_s3_tests/test_ceph_s3_config_setup.js around
lines 128-179, make setup_s3_namespace_resource validate required env vars up
front and be idempotent: check that MINIO_ENDPOINT, MINIO_USER, MINIO_PASSWORD
and MINIO_TEST_BUCKET exist and throw a clear error early if missing; when
creating the MinIO bucket, first check if it exists (or call headBucket) and
only create if absent, or catch and ignore the
"BucketAlreadyOwnedByYou"/existing-bucket error; when adding the external
connection, check for an existing connection by name (or catch and ignore the
duplicate-name error) and reuse it; when creating the namespace resource, check
for existence (read or list namespace resources) and skip creation if present
(or catch and ignore the already-exists error); optionally implement a small
polling loop after creation that calls client.pool.read_pool(name) and waits
until res.monitoring.state === 'OPTIMAL' or a timeout is reached, returning the
namespace name after readiness or after it already exists.

Contributor


+1!
Also, please consider extracting minio_connection to a constant



@Neon-White Thank you for the approval! You're absolutely right about extracting the connection name to a constant. Here's the updated code with the MINIO_CONNECTION_NAME constant:

async function setup_s3_namespace_resource() {
    const minio_endpoint = process.env.MINIO_ENDPOINT;
    const minio_user = process.env.MINIO_USER;
    const minio_password = process.env.MINIO_PASSWORD;
    const minio_test_bucket = process.env.MINIO_TEST_BUCKET;
    const namespace_resource_name = "ns-aws";
    const MINIO_CONNECTION_NAME = "minio-connection";

    // Validate required env vars
    const required = {
        MINIO_ENDPOINT: minio_endpoint,
        MINIO_USER: minio_user,
        MINIO_PASSWORD: minio_password,
        MINIO_TEST_BUCKET: minio_test_bucket,
    };
    for (const [k, v] of Object.entries(required)) {
        if (!v) throw new Error(`Missing required env var: ${k}`);
    }

    // create the bucket in Minio using AWS SDK
    console.info(`Creating bucket ${minio_test_bucket} in Minio...`);
    try {
        const s3 = new AWS.S3({
            endpoint: minio_endpoint,
            accessKeyId: minio_user,
            secretAccessKey: minio_password,
            s3ForcePathStyle: true,
            signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(minio_endpoint)
        });

        // Be idempotent: create if missing, else continue
        try {
            await s3.headBucket({ Bucket: minio_test_bucket }).promise();
            console.log(`Bucket ${minio_test_bucket} already exists in Minio`);
        } catch (e) {
            if (e.statusCode === 404 || e.code === 'NotFound' || e.code === 'NoSuchBucket') {
                await s3.createBucket({ Bucket: minio_test_bucket }).promise();
                console.log(`Bucket ${minio_test_bucket} created successfully in Minio`);
            } else {
                throw e;
            }
        }
    } catch (err) {
        throw new Error(`Bucket creation failed: ${err.message}`);
    }

    console.info('Creating external connection...');
    try {
        await client.account.add_external_connection({
            name: MINIO_CONNECTION_NAME,
            endpoint: minio_endpoint,
            endpoint_type: "S3_COMPATIBLE",
            identity: minio_user,
            secret: minio_password
        });
        console.log('External connection created successfully');
    } catch (err) {
        // Allow reruns
        if (!/already exists/i.test(err.message)) {
            throw new Error(`External connection creation failed: ${err.message}`);
        }
        console.log('External connection already exists. Continuing.');
    }

    console.info('Creating namespace resource...');
    try {
        await client.pool.create_namespace_resource({
            name: namespace_resource_name,
            connection: MINIO_CONNECTION_NAME,
            target_bucket: minio_test_bucket
        });
        console.log('Namespace resource created successfully');
    } catch (err) {
        // Allow reruns
        if (!/already exists/i.test(err.message)) {
            throw new Error(`Namespace resource creation failed: ${err.message}`);
        }
        console.log('Namespace resource already exists. Continuing.');
    }

    return namespace_resource_name;
}

The constant extraction eliminates the magic string and makes the connection name easily configurable in one place.

@achouhan09
Member Author

Currently, we do not support the creation of buckets via the S3 API on AWS namespace stores for non-NSFS setup. This is a key requirement for ceph tests, as most of these tests attempt to create buckets using noobaa account credentials through the S3 API. Because of this, 291 Ceph tests are failing. Therefore, we cannot proceed with this PR until this feature is supported.
And the same applies to other types of namespacestore.

cc @liranmauda @jackyalbo @Neon-White

Contributor

@Neon-White Neon-White left a comment


Good job :)
Bucket creation aside, a few minor comments.
Also, could you please write/generate some relevant docs and put them under CI & Tests?

@echo "\033[1;34mRunning Minio container\033[0m"
$(CONTAINER_ENGINE) run -d $(CPUSET) --network noobaa-net --name coretest-minio-$(GIT_COMMIT)-$(NAME_POSTFIX) -p $(MINIO_PORT):$(MINIO_PORT) --env "MINIO_ROOT_USER=$(MINIO_USER)" --env "MINIO_ROOT_PASSWORD=$(MINIO_PASSWORD)" $(MINIO_IMAGE) server /data --address ":$(MINIO_PORT)"
@echo "\033[1;34mWaiting for Minio to start..\033[0m"
sleep 20
Contributor

The fixed time period (20 seconds) might turn out to be too short in some environments and needlessly long in others. I'd recommend implementing a proper health check with periodic (~1 second) polling instead.
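A minimal sketch of such a polling loop, as a generic wait helper. The MinIO liveness URL in the comment is based on MinIO's documented `/minio/health/live` endpoint; the port and timeout values are assumptions:

```shell
# Poll a command once per second until it succeeds or a timeout expires.
# Usage: wait_for <timeout_seconds> <command...>
wait_for() {
    timeout="$1"; shift
    elapsed=0
    until "$@"; do
        elapsed=$((elapsed + 1))
        if [ "$elapsed" -ge "$timeout" ]; then
            echo "Timed out after ${timeout}s waiting for: $*" >&2
            return 1
        fi
        sleep 1
    done
}

# The fixed `sleep 20` in the recipe could then become something like:
# wait_for 30 curl -sf "http://localhost:$(MINIO_PORT)/minio/health/live"
```

This bounds the wait at the timeout in the worst case, while starting the tests within about a second of MinIO becoming ready in the common case.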

$(CONTAINER_ENGINE) run -d $(CPUSET) --network noobaa-net --name coretest-minio-$(GIT_COMMIT)-$(NAME_POSTFIX) -p $(MINIO_PORT):$(MINIO_PORT) --env "MINIO_ROOT_USER=$(MINIO_USER)" --env "MINIO_ROOT_PASSWORD=$(MINIO_PASSWORD)" $(MINIO_IMAGE) server /data --address ":$(MINIO_PORT)"
@echo "\033[1;34mWaiting for Minio to start..\033[0m"
sleep 20
@echo "\033[1;32mRun Minio done.\033[0m"
Contributor

What if the startup fails? It'd be good to add health checks and proper error handling (possibly with retry logic)
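One way to sketch that combined start/health-check/error-handling flow in plain shell (the function name, timeout, and message strings are all illustrative, not part of the PR):

```shell
# Start a service and poll a health-check command until it passes,
# failing the recipe if startup or the health check never succeeds.
# Usage: start_checked <name> <timeout_seconds> <start_cmd> <check_cmd>
start_checked() {
    name="$1"; timeout="$2"; start_cmd="$3"; check_cmd="$4"
    if ! sh -c "$start_cmd"; then
        echo "Failed to start $name" >&2
        return 1
    fi
    i=0
    while ! sh -c "$check_cmd"; do
        i=$((i + 1))
        if [ "$i" -ge "$timeout" ]; then
            echo "$name did not become healthy within ${timeout}s" >&2
            return 1
        fi
        sleep 1
    done
}
```

Returning non-zero here makes `make` stop the recipe immediately instead of running the tests against a container that never came up.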

Comment on lines +128 to +179
async function setup_s3_namespace_resource() {
    const minio_endpoint = process.env.MINIO_ENDPOINT;
    const minio_user = process.env.MINIO_USER;
    const minio_password = process.env.MINIO_PASSWORD;
    const minio_test_bucket = process.env.MINIO_TEST_BUCKET;
    const namespace_resource_name = "ns-aws";

    // create the bucket in Minio using AWS SDK
    console.info(`Creating bucket ${minio_test_bucket} in Minio...`);
    try {
        const s3 = new AWS.S3({
            endpoint: minio_endpoint,
            accessKeyId: minio_user,
            secretAccessKey: minio_password,
            s3ForcePathStyle: true,
            signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(minio_endpoint)
        });

        await s3.createBucket({ Bucket: minio_test_bucket }).promise();
        console.log(`Bucket ${minio_test_bucket} created successfully in Minio`);
    } catch (err) {
        throw new Error(`Bucket creation failed: ${err.message}`);
    }

    console.info('Creating external connection...');
    try {
        await client.account.add_external_connection({
            name: "minio-connection",
            endpoint: minio_endpoint,
            endpoint_type: "S3_COMPATIBLE",
            identity: minio_user,
            secret: minio_password
        });
        console.log('External connection created successfully');
    } catch (err) {
        throw new Error(`External connection creation failed: ${err.message}`);
    }

    console.info('Creating namespace resource...');
    try {
        await client.pool.create_namespace_resource({
            name: namespace_resource_name,
            connection: "minio-connection",
            target_bucket: minio_test_bucket
        });
        console.log('Namespace resource created successfully');
    } catch (err) {
        throw new Error(`Namespace resource creation failed: ${err.message}`);
    }

    return namespace_resource_name;
}
Contributor

+1!
Also, please consider extracting minio_connection to a constant.

Comment on lines +373 to +376
@$(call stop_noobaa)
@$(call stop_postgres)
@$(call stop_minio)
@$(call remove_docker_network)
Contributor

If a previous step fails, I believe it causes the entire make to fail, which means those cleanup steps will not run.
We need to make sure that the cleanup runs even if we encounter unexpected, fatal errors
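Since `make` aborts a recipe on the first failing command, one common fix is to run the tests and the cleanup inside a single shell invocation that always executes the cleanup and then propagates the original exit code. A sketch in plain shell (the `echo` stands in for the actual stop_noobaa/stop_postgres/stop_minio/remove_docker_network calls):

```shell
# Run a command, always perform cleanup afterwards, then propagate
# the command's original exit code so CI still reports the failure.
run_and_cleanup() {
    rc=0
    "$@" || rc=$?          # capture the failure instead of aborting
    echo "running cleanup"  # stand-in for the stop_* cleanup functions
    return "$rc"
}
```

In a Makefile the equivalent is typically a single recipe line of the form `test_cmd; rc=$$?; cleanup_cmds; exit $$rc`, so the cleanup runs even when the test step fails.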

        await s3.createBucket({ Bucket: minio_test_bucket }).promise();
        console.log(`Bucket ${minio_test_bucket} created successfully in Minio`);
    } catch (err) {
        throw new Error(`Bucket creation failed: ${err.message}`);
Contributor

This kind of error handling (throughout the PR) 'swallows' the error context and stack trace, I think it can be valuable to preserve them -

Suggested change
-        throw new Error(`Bucket creation failed: ${err.message}`);
+        err.message = `Bucket creation failed: ${err.message}`;
+        throw err; // Re-throw original error with modified message
     }
